In this paper we experiment with a two-player strategy board game in which playing models are evolved using reinforcement learning and neural networks. The models are evolved to speed up automatic game development, with human involvement at varying levels of sophistication and density compared against fully autonomous play. The experimental results suggest a clear and measurable association between the ability to win games and the ability to do so quickly, while at the same time demonstrating that there is a minimum level of human involvement below which no real learning occurs.